134 research outputs found
PDANet: Pyramid Density-aware Attention Net for Accurate Crowd Counting
Crowd counting, i.e., estimating the number of people in a crowded area, has
attracted much interest in the research community. Although many attempts have
been reported, crowd counting remains an open real-world problem due to the
vast variations in crowd density within the area of interest and severe
occlusion among the crowd. In this paper, we propose a novel Pyramid
Density-Aware Attention-based network, abbreviated as PDANet, that leverages
attention, pyramid-scale features, and two-branch decoder modules for
density-aware crowd counting. PDANet utilizes these modules to extract
features at different scales, focus on the relevant information, and suppress
misleading cues. We also address the variation in crowdedness levels among
different images with an exclusive Density-Aware Decoder (DAD). For this
purpose, a classifier evaluates the density level of the input features and
then passes them to the corresponding high- and low-density DAD modules.
Finally, we generate an overall density map by summing the low- and
high-density maps, used as spatial attention. Meanwhile, we employ two losses
to create a precise density map for the input scene. Extensive evaluations on
challenging benchmark datasets demonstrate the superior performance of the
proposed PDANet over well-known state-of-the-art methods, in terms of both
counting accuracy and the quality of the generated density maps.
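The density-aware routing idea in this abstract can be sketched in a few lines: a classifier score gates the input features toward a high- or low-density decoder branch, and the branch outputs are combined by summation. The function and parameter names below are purely illustrative assumptions, not taken from the paper's actual implementation, and the "decoders" are trivial stand-ins for the real convolutional branches.

```python
import numpy as np

def decoder(features, scale):
    # Stand-in for a DAD branch: a scaled nonlinearity producing a density map.
    # The real branches would be learned convolutional decoders.
    return scale * np.maximum(features, 0.0)

def density_aware_decode(features, density_score, threshold=0.5):
    """Route features to a high- or low-density decoder branch and
    combine the branch outputs by summation (spatial-attention style).
    `density_score` plays the role of the abstract's density classifier."""
    low_map = decoder(features, scale=0.5)   # tuned for sparse scenes
    high_map = decoder(features, scale=2.0)  # tuned for dense scenes
    gate = 1.0 if density_score > threshold else 0.0
    return gate * high_map + (1.0 - gate) * low_map

feats = np.random.rand(8, 8)
density_map = density_aware_decode(feats, density_score=0.9)
count = density_map.sum()  # crowd count = integral of the density map
```

In the paper the gating is learned end-to-end and the two branch maps are combined spatially; here a hard scalar gate stands in for that mechanism to keep the sketch minimal.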
Point Clouds Are Specialized Images: A Knowledge Transfer Approach for 3D Understanding
Self-supervised representation learning (SSRL) has gained increasing
attention in point cloud understanding, as it addresses the challenges posed
by 3D data scarcity and high annotation costs. This paper presents PCExpert, a
novel SSRL approach that reinterprets point clouds as "specialized images".
This conceptual shift allows PCExpert to leverage knowledge derived from the
large-scale image modality in a deeper and more direct manner, by extensively
sharing parameters with a pre-trained image encoder in a multi-way
Transformer architecture. The parameter sharing strategy, combined with a novel
pretext task for pre-training, i.e., transformation estimation, empowers
PCExpert to outperform state-of-the-art methods in a variety of tasks, with a
remarkable reduction in the number of trainable parameters. Notably, PCExpert's
performance under LINEAR fine-tuning (e.g., yielding a 90.02% overall accuracy
on ScanObjectNN) has already approached the results obtained with FULL model
fine-tuning (92.66%), demonstrating its effective and robust representation
capability.
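The LINEAR-versus-FULL fine-tuning comparison in this abstract reduces to which parameters are updated: linear probing freezes the pre-trained encoder and trains only a small classification head, while full fine-tuning updates everything. The toy sketch below is an assumption-laden illustration of that distinction (the weight shapes and names are invented; PCExpert's real encoder is a multi-way Transformer, not a one-layer network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-trained "encoder" weights: frozen under linear probing.
encoder_W = rng.normal(size=(16, 8))
# Linear classification head: trained in both regimes.
head_W = np.zeros((8, 4))

def forward(x):
    # Encoder features followed by the linear head.
    return np.maximum(x @ encoder_W, 0.0) @ head_W

def trainable_parameters(mode):
    # Linear probing updates only the head; full fine-tuning updates both,
    # which is why the abstract highlights the reduced trainable-parameter count.
    if mode == "linear":
        return head_W.size
    return encoder_W.size + head_W.size
```

That the linear-probing accuracy (90.02% on ScanObjectNN) approaches the full fine-tuning result (92.66%) indicates the frozen representation is already close to linearly separable for the downstream task.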
- …